Data-driven surrogate models for computational fluid dynamics (CFD) provide a promising way to accelerate flow simulations by several orders of magnitude. Their deployment, however, is often restricted by two primary factors: (1) the reliance on large and computationally demanding neural architectures that are difficult to run on limited hardware, and (2) the lack of explicit enforcement of physical constraints such as mass conservation. In this work, we present a lightweight, physics-guided U-Net designed for steady-state laminar flow prediction. The proposed architecture reduces the parameter count by nearly 60% (approximately 0.7M) compared to a conventional baseline model (around 1.8M parameters) and integrates a physics-informed divergence penalty to promote incompressible flow behavior. A controlled four-model ablation study combining architectural compression and physics-based regularization shows that the divergence term effectively counteracts the accuracy drop typically introduced by compression, while simultaneously improving physical consistency. The final compact model achieves velocity and pressure prediction errors close to the baseline, even when trained on a relatively small dataset of only 300 samples. Overall, the framework supports scalable, efficient, and physically coherent surrogate modeling suitable for real-time CFD applications, digital twins, and edge-based deployment.
Introduction
Computational Fluid Dynamics (CFD) provides detailed insights into fluid behavior but is often computationally expensive, especially for iterative design or optimization tasks. For laminar, steady flows, traditional solvers remain slow despite simplified conditions. Recent machine learning approaches, particularly U-Net and CNN-based surrogate models, offer faster predictions of velocity and pressure fields, but they often require large computational resources and may violate physical constraints like mass conservation.
This study introduces Flow-CUNet, a compact, physics-regularized surrogate network designed for efficient and physically consistent CFD predictions. Key contributions include:
Compressed U-Net architecture reducing parameters by ~60% for faster training and inference.
Physics-aware loss function with divergence regularization to enforce mass conservation.
Efficient training on just 300 high-fidelity ANSYS Fluent samples, enabling strong accuracy and better deployability on limited hardware.
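The divergence-regularization idea in the second contribution can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it assumes a 2-D velocity field stored as NumPy arrays indexed `[y, x]`, central finite differences on the interior, and a hypothetical weighting factor `lam` for the penalty term.

```python
import numpy as np

def divergence_penalty(u, v, dx=1.0, dy=1.0):
    """Mean squared divergence of a 2-D velocity field (u, v),
    approximated with central finite differences on interior points.
    For incompressible flow, du/dx + dv/dy should vanish."""
    du_dx = (u[1:-1, 2:] - u[1:-1, :-2]) / (2.0 * dx)   # d(u)/dx along x (axis 1)
    dv_dy = (v[2:, 1:-1] - v[:-2, 1:-1]) / (2.0 * dy)   # d(v)/dy along y (axis 0)
    div = du_dx + dv_dy
    return float(np.mean(div ** 2))

def physics_guided_loss(pred_u, pred_v, pred_p,
                        true_u, true_v, true_p, lam=0.1):
    """Data-fidelity MSE on (u, v, p) plus the divergence regularizer,
    weighted by a hypothetical coefficient lam."""
    mse = (np.mean((pred_u - true_u) ** 2)
           + np.mean((pred_v - true_v) ** 2)
           + np.mean((pred_p - true_p) ** 2))
    return float(mse + lam * divergence_penalty(pred_u, pred_v))
```

A uniform flow (constant `u`, zero `v`) is exactly divergence-free, so its penalty is zero; a field with `u` growing linearly in `x` accrues a nonzero penalty, which is the behavior the regularizer exploits during training.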
Conclusion
This work introduced Flow-CUNet, a compact and physics-regularized convolutional surrogate model designed for predicting steady incompressible flows. In contrast to earlier studies that rely on synthetic obstacle datasets, the present work evaluated the model using a high-fidelity cylinder-flow benchmark generated through ANSYS Fluent simulations. By compressing the traditional U-Net encoder–decoder structure into a smaller feature space, the proposed architecture reduces the parameter count by nearly 60% while preserving strong predictive capability. Incorporating a divergence-penalty term proved crucial for enhancing physical consistency. The qualitative and quantitative experiments showed that Flow-CUNet reconstructs velocity and pressure fields with high accuracy, successfully capturing wake development and boundary-layer behavior. Furthermore, divergence comparisons demonstrated that the physics-informed formulation yields velocity fields that are nearly divergence-free, substantially improving mass conservation relative to a standard CNN baseline.
The ablation study revealed that compression alone leads to reduced accuracy, while physics-based regularization alone improves stability but does not address efficiency. Their combination in Flow-CUNet provides the optimal balance between predictive accuracy, physical fidelity, and computational efficiency, making the model suitable for real-time or resource-constrained deployment scenarios.

Future work will extend Flow-CUNet to more complex aerodynamic configurations, explore applications to transient flows, and incorporate uncertainty quantification into the training process. Overall, the findings highlight that lightweight, physics-guided neural architectures offer a promising direction for advancing next-generation CFD surrogate modeling.
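As a back-of-envelope illustration of how channel-width compression shrinks an encoder–decoder network: the parameter count of a convolutional layer scales with the product of its input and output channel widths, so narrowing every level of a U-Net-style encoder reduces the total sharply. The channel widths below are hypothetical examples, not the paper's exact configuration, and the count covers only a two-convolution-per-level encoder as a relative size indicator.

```python
def conv_params(c_in, c_out, k=3):
    """Parameter count of one k x k convolution: weights plus biases."""
    return c_in * c_out * k * k + c_out

def encoder_params(widths, in_ch=3):
    """Rough parameter count for a U-Net-style encoder with two
    convolutions per resolution level (hypothetical configuration)."""
    total, prev = 0, in_ch
    for w in widths:
        total += conv_params(prev, w) + conv_params(w, w)
        prev = w
    return total

baseline = encoder_params([64, 128, 256, 512])  # conventional widths
compact = encoder_params([32, 64, 128, 256])    # halved widths
reduction = 1.0 - compact / baseline
```

Halving every channel width roughly quarters the convolution parameters in this sketch; the paper's reported ~60% reduction (about 0.7M vs. 1.8M parameters) presumably reflects a different, less aggressive compression of the full encoder–decoder.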